
A General Framework for Auditing Differentially Private Machine Learning

Neural Information Processing Systems

We present a framework to statistically audit the privacy guarantee conferred by a differentially private machine learner in practice. While previous works have taken steps toward evaluating privacy loss through poisoning attacks or membership inference, they have been tailored to specific models or have demonstrated low statistical power. Our work develops a general methodology to empirically evaluate the privacy of differentially private machine learning implementations, combining improved privacy search and verification methods with a toolkit of influence-based poisoning attacks. We demonstrate significantly improved auditing power over previous approaches on a variety of models including logistic regression, Naive Bayes, and random forest. Our method can be used to detect privacy violations due to implementation errors or misuse. When violations are not present, it can aid in understanding the amount of information that can be leaked from a given dataset, algorithm, and privacy specification.

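The auditing recipe the abstract describes reduces to a hypothesis test: run a membership or poisoning attack many times, then convert the attack's true- and false-positive rates into a statistically valid lower bound on the privacy parameter ε, using the (ε, δ)-DP constraint TPR ≤ e^ε · FPR + δ. A minimal sketch of that conversion follows, using a simple Hoeffding confidence bound in place of the exact binomial (Clopper-Pearson) intervals such audits typically use; the function names and the choice of bound are illustrative, not taken from the paper.

```python
import math

def hoeffding_lower(k, n, alpha=0.05):
    # One-sided Hoeffding lower confidence bound on a binomial proportion:
    # with probability >= 1 - alpha, the true rate is at least this value.
    return max(0.0, k / n - math.sqrt(math.log(1 / alpha) / (2 * n)))

def hoeffding_upper(k, n, alpha=0.05):
    # Matching one-sided upper confidence bound.
    return min(1.0, k / n + math.sqrt(math.log(1 / alpha) / (2 * n)))

def empirical_epsilon_lower_bound(tp, n_pos, fp, n_neg, delta=0.0, alpha=0.05):
    """Conservative empirical lower bound on epsilon from attack outcomes.

    Any (eps, delta)-DP mechanism forces TPR <= e^eps * FPR + delta for
    every distinguishing attack, so eps >= ln((TPR - delta) / FPR).
    To keep the bound valid despite sampling noise, we lower-bound the
    TPR and upper-bound the FPR before taking the logarithm.
    """
    tpr_lo = hoeffding_lower(tp, n_pos, alpha)
    fpr_hi = hoeffding_upper(fp, n_neg, alpha)
    if tpr_lo - delta <= 0 or fpr_hi <= 0:
        return 0.0  # attack shows no certifiable advantage
    return max(0.0, math.log((tpr_lo - delta) / fpr_hi))
```

For example, an attack that succeeds on 450 of 500 member trials while firing on only 50 of 500 non-member trials certifies ε above roughly 1.7, whereas an attack at chance level certifies nothing. A violation is detected when this lower bound exceeds the ε the implementation claims.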

Review for NeurIPS paper: Auditing Differentially Private Machine Learning: How Private is Private SGD?

Neural Information Processing Systems

Weaknesses: My biggest concern with this work is that I believe the authors oversell their result. Their variance-based computation of singular vectors (and hence the data poisoning attack) relies heavily on having a good understanding of the variance, which is model-dependent. It is easiest for logistic regression, and I suspect that is why the paper focuses on logistic regression. As the bound before line 225 is not tight for many other learning tasks, I doubt they would see such a large improvement elsewhere.


Review for NeurIPS paper: Auditing Differentially Private Machine Learning: How Private is Private SGD?

Neural Information Processing Systems

All three reviewers support acceptance of this paper. They agree that providing lower bounds on the privacy guarantees of DP-SGD is an important problem and that the paper makes significant headway on it. It would be worthwhile to incorporate changes into the camera-ready version of the paper clarifying some of Reviewer 1's questions.


Auditing Differentially Private Machine Learning: How Private is Private SGD?

Neural Information Processing Systems

We investigate whether Differentially Private SGD offers better privacy in practice than is guaranteed by its state-of-the-art analysis. We do so via novel data poisoning attacks, which we show correspond to realistic privacy attacks. While previous work (Ma et al., arXiv 2019) proposed this connection between differential privacy and data poisoning as a defense against poisoning, our use of it as a tool for understanding the privacy of a specific mechanism is new. More generally, our work takes a quantitative, empirical approach to understanding the privacy afforded by specific implementations of differentially private algorithms, which we believe has the potential to complement and influence analytical work on differential privacy.

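To make the poisoning-as-audit idea concrete, here is a toy distinguishing game played against the Gaussian mechanism, used as a stand-in for one noisy gradient step of DP-SGD; the poison point shifts the released statistic by its influence (here normalized to 1), and the auditor thresholds the output to guess whether the poison was present. All names and parameter values are illustrative, not from the paper.

```python
import random

def gaussian_mechanism(value, sigma, rng):
    # Release value + N(0, sigma^2): a stand-in for one noisy DP-SGD step.
    return value + rng.gauss(0.0, sigma)

def audit_rates(sigma=1.0, trials=2000, seed=0):
    """Distinguishing game for a poisoning audit.

    The poisoned dataset shifts the released statistic from 0 to 1
    (the poison point's influence). The distinguisher guesses
    'poisoned' whenever the noisy output exceeds the midpoint 0.5,
    and we report its empirical (TPR, FPR) over many trials.
    """
    rng = random.Random(seed)
    tp = sum(gaussian_mechanism(1.0, sigma, rng) > 0.5 for _ in range(trials))
    fp = sum(gaussian_mechanism(0.0, sigma, rng) > 0.5 for _ in range(trials))
    return tp / trials, fp / trials
```

The larger the gap between TPR and FPR, the larger the privacy loss the audit certifies: feeding these rates into a confidence-interval bound on ε yields exactly the kind of empirical lower bound on DP-SGD's privacy that the paper reports.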
